As a general class of machine learning methods, artificial neural networks have set state-of-the-art benchmarks in many pattern recognition and data analysis tasks. Among the various neural network architectures, polynomial neural networks (PNNs) have recently been shown to be analyzable via the neural tangent kernel, and to be particularly effective in image generation and face recognition. However, obtaining theoretical insight into the computational and sample complexity of PNNs remains an open problem. In this paper, we extend analyses from the prior literature to PNNs and obtain novel results on the sample complexity of PNNs, which provide some insight into explaining the generalization ability of PNNs.
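To make the object of study concrete, below is a minimal sketch of the multiplicative structure that characterizes polynomial networks; the block design (a degree-2 Hadamard-product block) is an illustrative assumption, not the exact architecture analyzed in the paper:

```python
import torch
import torch.nn as nn

class PolynomialBlock(nn.Module):
    """Degree-2 polynomial block: y = (W1 x) * (W2 x) + W3 x.

    The elementwise product of two linear maps makes the output a
    degree-2 polynomial in the input, with no pointwise nonlinearity.
    """
    def __init__(self, dim_in: int, dim_out: int):
        super().__init__()
        self.w1 = nn.Linear(dim_in, dim_out, bias=False)
        self.w2 = nn.Linear(dim_in, dim_out, bias=False)
        self.w3 = nn.Linear(dim_in, dim_out, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.w1(x) * self.w2(x) + self.w3(x)

# Stacking k such blocks yields a polynomial of degree 2^k in the input.
net = nn.Sequential(PolynomialBlock(16, 32), PolynomialBlock(32, 8))
print(net(torch.randn(4, 16)).shape)  # torch.Size([4, 8])
```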
Swarm learning (SL) is an emerging and promising decentralized machine learning paradigm that has achieved high performance in clinical applications. SL addresses the problem of the central structure in federated learning by combining edge computing with a blockchain-based peer-to-peer network. While SL shows promising results under the assumption of independent and identically distributed (IID) data across participants, it suffers from performance degradation as the degree of non-IID data increases. To address this problem, we propose a generative augmentation framework for swarm learning called SL-GAN, which augments non-IID data by generating synthetic data from participants. SL-GAN trains generators and discriminators locally and periodically aggregates them via a randomly elected coordinator in the SL network. Under standard assumptions, we theoretically prove the convergence of SL-GAN using stochastic approximation. Experimental results demonstrate that SL-GAN outperforms state-of-the-art methods on three real-world clinical datasets: Tuberculosis, Leukemia, and COVID-19.
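A minimal sketch of the periodic aggregation step described above, assuming simple parameter averaging (the paper's actual aggregation rule and blockchain plumbing are not reproduced here; all function names are illustrative):

```python
import random
import torch

def elect_coordinator(participant_ids):
    """Randomly elect one participant to coordinate this round."""
    return random.choice(participant_ids)

def average_state_dicts(state_dicts):
    """Parameter-wise average of local model weights (FedAvg-style)."""
    avg = {}
    for key in state_dicts[0]:
        stacked = torch.stack([sd[key].float() for sd in state_dicts])
        avg[key] = stacked.mean(dim=0).to(state_dicts[0][key].dtype)
    return avg

def aggregation_round(local_generators, local_discriminators, participant_ids):
    coordinator = elect_coordinator(participant_ids)
    g_global = average_state_dicts([g.state_dict() for g in local_generators])
    d_global = average_state_dicts([d.state_dict() for d in local_discriminators])
    # The coordinator broadcasts the averaged weights back to all participants.
    for g, d in zip(local_generators, local_discriminators):
        g.load_state_dict(g_global)
        d.load_state_dict(d_global)
    return coordinator
```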
Automated program repair (APR) aims to fix bugs in source code automatically. Recently, with advances in deep learning (DL), neural program repair (NPR) research has emerged, which formulates APR as a translation task from buggy code to correct code and adopts neural networks based on the encoder-decoder architecture. Compared with other APR techniques, NPR approaches have a major advantage in applicability because they do not require any specification (i.e., a test suite). Although NPR has been a hot research direction, the field still lacks an overview. To help interested readers understand the architectures, challenges, and corresponding solutions of existing NPR systems, we conduct a literature review of the latest research in this paper. We first introduce the background knowledge of the field. Next, for ease of understanding, we decompose the NPR procedure into a series of modules and explicate the various design choices in each module. Furthermore, we identify several challenges and discuss the effects of existing solutions. Finally, we conclude and provide some promising directions for future research.
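A minimal sketch of the translation framing described above: buggy code in, ranked candidate fixes out. The checkpoint shown is a public code encoder-decoder model used purely for illustration; a real NPR system would be fine-tuned on buggy/fixed pairs, and the toy bug below is a made-up example:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Illustrative base checkpoint; not a repair-tuned model.
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("Salesforce/codet5-base")

buggy = "def is_even(n):\n    return n % 2 == 1"  # bug: should be == 0
inputs = tokenizer(buggy, return_tensors="pt")
# Beam search yields a ranked list of candidate patches, which an APR
# pipeline would then validate (NPR itself needs no test suite).
outputs = model.generate(**inputs, max_length=64,
                         num_beams=5, num_return_sequences=5)
candidates = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
```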
Given the facts of a case, legal judgment prediction (LJP) involves a series of sub-tasks, such as predicting the violated law articles, the charges, and the term of penalty. We propose leveraging a unified text-to-text Transformer for LJP, where the dependencies among sub-tasks can be naturally established within the autoregressive decoder. Compared with previous work, this has three advantages: (1) it fits the pre-training paradigm of masked language models and can thus benefit from the semantic cues of each sub-task rather than treating them as atomic labels; (2) it uses a single unified architecture, enabling full parameter sharing across all sub-tasks; and (3) it can accommodate both classification and generation sub-tasks. We show that this unified Transformer, despite being pre-trained on general-domain text, outperforms pre-trained models tailored specifically to the legal domain. Through extensive experiments, we find that the best order for capturing dependencies differs from human intuition, and the most logically plausible order for humans can be sub-optimal for the model. We also include two auxiliary tasks, court-view generation and article-content prediction, and show that they not only improve prediction accuracy but also provide interpretable explanations for model outputs, even when the model makes mistakes. With the best configuration, our model outperforms both the previous SOTA and a single-task version of the unified Transformer by a large margin.
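A minimal sketch of how sub-task labels can be serialized into one target string so the autoregressive decoder conditions later sub-tasks on earlier ones; the field names, separators, and example case are illustrative assumptions, not the paper's actual prompt format:

```python
def build_ljp_target(articles, charges, term_months,
                     order=("article", "charge", "term")):
    """Serialize LJP sub-task labels into a single decoder target.

    Permuting `order` lets one probe which sub-task dependency
    ordering the model captures best.
    """
    fields = {
        "article": "articles: " + ", ".join(articles),
        "charge": "charges: " + ", ".join(charges),
        "term": f"term: {term_months} months",
    }
    return " ; ".join(fields[name] for name in order)

fact = "The defendant broke into the victim's home and stole cash."
target = build_ljp_target(["Article 264"], ["theft"], 9)
# A text-to-text model is then trained on (fact, target) pairs.
```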
Code summarization aims to generate brief natural language descriptions for source code. Since source code is highly structured and follows strict programming language grammar, its abstract syntax tree (AST) is often exploited to provide the encoder with structural information. However, the AST is usually much longer than the source code itself. Current approaches ignore this size limit and simply feed the entire linearized AST into the encoder. To address this problem, we propose AST-Transformer to efficiently encode tree-structured ASTs. Experiments show that AST-Transformer outperforms the state of the art by a substantial margin while reducing the computational complexity of the encoding process by roughly 90%.
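The sketch below illustrates the length problem the paper targets: a naive pre-order linearization turns every syntactic node into a token, so the AST sequence is much longer than the source. This reproduces only the naive baseline, not AST-Transformer's efficient encoding:

```python
import ast

def linearize_ast(source: str) -> list:
    """Pre-order traversal of a Python AST into a flat node-type sequence."""
    tokens = []
    def visit(node):
        tokens.append(type(node).__name__)
        for child in ast.iter_child_nodes(node):
            visit(child)
    visit(ast.parse(source))
    return tokens

code = "def add(a, b):\n    return a + b"
print(len(code.split()), "source tokens vs",
      len(linearize_ast(code)), "AST nodes")
```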
Automatically recommending relevant law articles for a specific legal case has attracted much attention, as it can greatly release human labor from searching over large legal databases. However, current research only supports coarse-grained recommendation, where all relevant articles are predicted as a whole, without explaining which specific fact each article is relevant to. Since one case can be formed by many supporting facts, traversing them to verify the correctness of the recommendation results can be time-consuming. We believe that learning fine-grained correspondences between each individual fact and each law article is crucial for an accurate and trustworthy AI system. With this motivation, we perform a pioneering study and create a manually annotated corpus of fact-article correspondences. We treat the learning problem as a text-matching task and propose a multi-level matching network to address it. To help the model better digest the content of law articles, we parse the articles into premise-conclusion pairs using a random forest. Experiments show that the parsed form yields better performance and that the resulting model surpasses other popular text-matching baselines. Furthermore, we compare with previous studies and find that establishing fine-grained fact-article correspondences can improve recommendation accuracy by a large margin. Our best system reaches an F1 score of 96.3%, giving it potential for practical use. It can also significantly boost the downstream task of legal decision prediction, improving F1 by 12.7%.
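A bare-bones stand-in for the fine-grained matching idea, assuming facts and articles have already been embedded by some encoder; scoring every (fact, article) pair is what lets each recommended article be traced back to its supporting facts. The multi-level structure of the paper's network is not reproduced:

```python
import torch
import torch.nn.functional as F

def match_scores(fact_embs: torch.Tensor,
                 article_embs: torch.Tensor) -> torch.Tensor:
    """Cosine-similarity matching between each fact and each article."""
    fact_n = F.normalize(fact_embs, dim=-1)     # (num_facts, d)
    art_n = F.normalize(article_embs, dim=-1)   # (num_articles, d)
    return fact_n @ art_n.T                     # (num_facts, num_articles)

scores = match_scores(torch.randn(5, 128), torch.randn(3, 128))
pairs = (scores > 0.5).nonzero()  # fine-grained fact-article correspondences
```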
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot benefit, or benefit only marginally, from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, and sequential distillation, revealing that: 1) distilling token relations is more effective than CLS-token- and feature-based distillation; 2) using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; and 3) weak regularization is preferred. With these findings, we achieve significant fine-tuning accuracy improvements over from-scratch MIM pre-training on ImageNet-1K classification for the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our base-size TinyMIM model achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our tiny-size TinyMIM model achieves 79.6% top-1 accuracy on ImageNet-1K image classification, setting a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way to develop small vision Transformer models: exploring better training methods rather than introducing inductive biases into architectures, as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
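A minimal sketch of token-relation distillation (finding 1 above), assuming access to query/key projections from an intermediate teacher layer; the exact set of relations and the temperature are illustrative choices, not the paper's final recipe:

```python
import torch
import torch.nn.functional as F

def relation_distill_loss(student_q, student_k, teacher_q, teacher_k, tau=1.0):
    """KL divergence between token-to-token relation maps.

    Matches softmax-normalized Q.K^T relations rather than CLS tokens or
    raw features. Shapes: (batch, heads, tokens, head_dim).
    """
    def relation_logits(q, k):
        return (q @ k.transpose(-2, -1)) / (q.shape[-1] ** 0.5)

    s = F.log_softmax(relation_logits(student_q, student_k) / tau, dim=-1)
    with torch.no_grad():
        t = F.softmax(relation_logits(teacher_q, teacher_k) / tau, dim=-1)
    return F.kl_div(s, t, reduction="batchmean")
```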
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point-cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding 3D points into multi-modal features. The core design of CMT is quite simple, yet its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT exhibits strong robustness even when the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
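A minimal sketch of the implicit-alignment idea: both modalities receive position encodings derived from 3D coordinates and are fused by a plain transformer, with no explicit view transform. The dimensions, coordinate-encoding MLP, and use of an encoder (CMT proper is a detection head with object queries) are simplifying assumptions:

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Fuse image and point-cloud tokens via shared 3D position encodings."""
    def __init__(self, dim=256, heads=8, layers=2):
        super().__init__()
        self.coord_enc = nn.Sequential(
            nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, layers)

    def forward(self, img_tokens, img_coords, pts_tokens, pts_coords):
        # Implicit spatial alignment: both token sets share one 3D space.
        img = img_tokens + self.coord_enc(img_coords)
        pts = pts_tokens + self.coord_enc(pts_coords)
        return self.encoder(torch.cat([img, pts], dim=1))

m = CrossModalFusion()
out = m(torch.randn(2, 100, 256), torch.randn(2, 100, 3),
        torch.randn(2, 200, 256), torch.randn(2, 200, 3))
```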
Dataset distillation has emerged as a prominent technique for improving data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset; a model trained on this smaller distilled dataset can attain performance comparable to that of a model trained on the original training dataset. However, existing dataset distillation techniques mainly aim at achieving the best trade-off between resource-usage efficiency and model utility, and the security risks stemming from them have not been explored. This study performs the first backdoor attack against models trained on data distilled by dataset distillation in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks were performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent them.
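A minimal NAIVEATTACK-style sketch: stamp a small trigger patch onto a fraction of the raw images before distillation begins, so the backdoor is baked into the synthetic dataset. The patch size, location, and poison fraction are illustrative; DOORPING would instead re-optimize the trigger throughout distillation:

```python
import torch

def add_trigger(images, trigger, poison_frac=0.1):
    """Stamp `trigger` onto a random fraction of `images` (NCHW tensor)."""
    images = images.clone()
    n = max(1, int(len(images) * poison_frac))
    idx = torch.randperm(len(images))[:n]
    ph, pw = trigger.shape[-2:]
    images[idx, :, -ph:, -pw:] = trigger  # bottom-right corner patch
    return images, idx

imgs = torch.rand(100, 3, 32, 32)
trigger = torch.ones(3, 4, 4)  # a white 4x4 square
poisoned, poisoned_idx = add_trigger(imgs, trigger)
# `poisoned` is then fed to the (unmodified) distillation procedure.
```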
Blind image quality assessment (BIQA) remains challenging due to the diversity of distortions and the variation of image content, which complicate the distortion patterns across different scales and aggravate the difficulty of the regression problem in BIQA. However, existing BIQA methods often fail to consider multi-scale distortion patterns and image content, and little research has been done on learning strategies that make the regression model perform better. In this paper, we propose a simple yet effective Progressive Multi-Task Image Quality Assessment (PMT-IQA) model, which contains a multi-scale feature extraction module (MS) and a progressive multi-task learning module (PMT), to help the model learn complex distortion patterns and better optimize the regression problem, following the easy-to-hard law of the human learning process. To verify the effectiveness of the proposed PMT-IQA model, we conduct experiments on four widely used public datasets. The experimental results indicate that PMT-IQA outperforms the comparison approaches and that both the MS and PMT modules improve the model's performance.
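A minimal sketch of the progressive easy-to-hard idea: shift emphasis from an easy auxiliary task (e.g., coarse quality classification) to the hard target task (exact quality-score regression) as training proceeds. The linear schedule and the particular task split are assumptions, not the paper's exact PMT module:

```python
def pmt_loss(epoch: int, total_epochs: int, loss_easy, loss_hard):
    """Blend easy and hard task losses with an easy-to-hard schedule.

    alpha ramps from 0 to 1 over training, so early epochs emphasize the
    easy task and later epochs emphasize score regression.
    """
    alpha = epoch / max(1, total_epochs - 1)
    return (1 - alpha) * loss_easy + alpha * loss_hard

# Example: at epoch 0 the easy loss dominates; at the last epoch only
# the regression loss remains.
print(pmt_loss(0, 10, 1.0, 2.0), pmt_loss(9, 10, 1.0, 2.0))  # 1.0 2.0
```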